Workshop Labs 2026-03-13-1
Open Weights Isn't Open Training
The author hits six compounding bugs across PyTorch → CUDA → accelerate → transformers → PEFT → compressed_tensors just to LoRA-tune a 1T-parameter MoE, and even then the expert weights don't train. The article is a first-person case study in why "open weights" without training enablement is a weaker form of openness than the narrative suggests. Note, though, that Workshop Labs sells training infrastructure and benchmarks against Tinker (Thinking Machines) without disclosing any relationship: the pain they document is the demand they intend to capture.
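
As a rough illustration of how expert weights can end up frozen under a LoRA setup (this sketch is not from the article; the model id, `target_modules` list, and `".experts."` naming are placeholder assumptions), the snippet below applies a PEFT `LoraConfig` that targets only attention projections and then counts how many trainable parameters land inside expert submodules:

```python
# Minimal sketch: check whether any MoE expert parameters are trainable
# after wrapping a model with a LoRA adapter. Names below are illustrative
# assumptions; real MoE checkpoints use model-specific module names.
from collections import Counter

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-moe-model",  # placeholder model id
    torch_dtype=torch.bfloat16,
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type="CAUSAL_LM",
    # Common attention projections; expert FFN matrices are often not matched
    # by a target list like this, which is one way experts stay frozen.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)

# Bucket trainable parameter counts by whether they sit inside an expert
# submodule (the ".experts." substring is an assumed naming convention).
buckets = Counter()
for name, param in model.named_parameters():
    if param.requires_grad:
        buckets["experts" if ".experts." in name else "non-experts"] += param.numel()

# If buckets["experts"] is 0, no expert matrix receives gradients.
print(buckets)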